
    Logarithmic and Riesz Equilibrium for Multiple Sources on the Sphere --- the Exceptional Case

    We consider the minimal discrete and continuous energy problems on the unit sphere $\mathbb{S}^d$ in the Euclidean space $\mathbb{R}^{d+1}$ in the presence of an external field due to finitely many localized charge distributions on $\mathbb{S}^d$, where the energy arises from the Riesz potential $1/r^s$ ($r$ is the Euclidean distance) for the critical Riesz parameter $s = d - 2$ if $d \geq 3$ and the logarithmic potential $\log(1/r)$ if $d = 2$. Individually, a localized charge distribution is either a point charge or assumed to be rotationally symmetric. The extremal measure solving the continuous external field problem for weak fields is shown to be the uniform measure on the sphere restricted to the exterior of spherical caps surrounding the localized charge distributions. The radii are determined by the relative strengths of the generating charges. Furthermore, we show that the minimal energy points solving the related discrete external field problem are confined to this support. For $d - 2 \leq s < d$, we show that for point sources on the sphere, the equilibrium measure has support in the complement of the union of specified spherical caps about the sources. Numerical examples are provided to illustrate our results. Comment: 23 pages, 4 figures.
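    The confinement of minimal-energy points to the complement of a cap can be observed numerically. Below is a hypothetical sketch (not the authors' algorithm; the point count, charge strength, and step size are illustrative) that minimizes the discrete logarithmic energy on $\mathbb{S}^2$ (the $d = 2$ case) under a single point charge at the north pole by projected gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

N, q = 80, 5.0                      # N unit charges; q is the source strength (illustrative)
p = np.array([0.0, 0.0, 1.0])       # point source at the north pole

X = normalize(rng.normal(size=(N, 3)))

def grad(X):
    # gradient of E = sum_{i<j} log(1/|x_i - x_j|) + q * sum_i log(1/|x_i - p|)
    D = X[:, None, :] - X[None, :, :]
    r2 = np.sum(D**2, axis=2) + np.eye(N)       # eye avoids 0/0 on the diagonal
    G = -np.sum(D / r2[:, :, None], axis=1)
    Dp = X - p
    G -= q * Dp / np.sum(Dp**2, axis=1, keepdims=True)
    return G

step = 0.002
for _ in range(4000):
    X = normalize(X - step * grad(X))           # gradient step, projected back to the sphere

# minimal-energy points should vacate a spherical cap around the source
print("largest inner product with the source direction:", np.max(X @ p))
```

    At convergence the configuration leaves a cap around the source empty, with the cap growing with the charge strength $q$, in line with the continuous result.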

    Direct and Inverse Results on Bounded Domains for Meshless Methods via Localized Bases on Manifolds

    This article develops direct and inverse estimates for certain finite-dimensional spaces arising in kernel approximation. Both the direct and inverse estimates are based on approximation spaces spanned by local Lagrange functions which are spatially highly localized. The construction of such functions is computationally efficient and generalizes the construction given by the authors for restricted surface splines on $\mathbb{R}^d$. The kernels for which the theory applies include the Sobolev–Matérn kernels for closed, compact, connected, $C^\infty$ Riemannian manifolds. Comment: 29 pages. To appear in Festschrift for the 80th Birthday of Ian Sloan.
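    To make the notion of a kernel Lagrange basis concrete, here is a hypothetical sketch (not the paper's construction, which builds *local* Lagrange functions from small neighborhoods of centers; the kernel, centers, and parameters are illustrative) computing full Lagrange functions for a Matérn-type kernel on the circle $\mathbb{S}^1$:

```python
import numpy as np

# centers on the unit circle S^1, a simple compact manifold
n = 30
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
X = np.column_stack([np.cos(t), np.sin(t)])

def matern(r, eps=5.0):
    # Matérn kernel with smoothness nu = 3/2, a stand-in for the Sobolev–Matérn kernels
    return (1.0 + eps * r) * np.exp(-eps * r)

def gram(A, B):
    r = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return matern(r)

K = gram(X, X)
C = np.linalg.solve(K, np.eye(n))   # column j: coefficients of the j-th Lagrange function

def lagrange(x, j):
    # evaluate the j-th Lagrange function u_j at the points x
    return gram(x, X) @ C[:, j]

# the cardinality property u_j(x_i) = delta_ij holds by construction
print(np.round(lagrange(X, 0), 6))
```

    The point of the local construction in the paper is that comparable functions can be built without the dense global solve above, while retaining rapid spatial decay.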

    Splines and Wavelets on Geophysically Relevant Manifolds

    Analysis on the unit sphere $\mathbb{S}^{2}$ has found many applications in seismology, weather prediction, astrophysics, signal analysis, crystallography, computer vision, computerized tomography, neuroscience, and statistics. In the last two decades, the importance of these and other applications triggered the development of various tools such as splines and wavelet bases suitable for the unit spheres $\mathbb{S}^{2}$ and $\mathbb{S}^{3}$ and the rotation group $SO(3)$. The present paper summarizes some results of the author and his collaborators on generalized (average) variational splines and localized frames (wavelets) on compact Riemannian manifolds. The results are illustrated by applications to Radon-type transforms on $\mathbb{S}^{d}$ and $SO(3)$. Comment: The final publication is available at http://www.springerlink.co

    The impact of Stieltjes' work on continued fractions and orthogonal polynomials

    Stieltjes' work on continued fractions and the orthogonal polynomials related to continued fraction expansions is summarized, and an attempt is made to describe the influence of Stieltjes' ideas and work on research done after his death, with an emphasis on the theory of orthogonal polynomials.

    On Smooth Activation Functions


    Convergence of Lagrange Interpolation for Freud Weights in Weighted $L^p(\mathbb{R})$, $0 < p \leq 1$


    Deep vs. shallow networks: An approximation theory perspective

    © 2016 World Scientific Publishing Company. The paper briefly reviews several recent results on hierarchical architectures for learning from examples that may formally explain the conditions under which deep convolutional neural networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. The paper announces new results for a non-smooth activation function, the ReLU function, used in present-day neural networks, as well as for Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can possibly be exploited by deep networks but not by shallow ones to drastically reduce the complexity required for approximation and learning.
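    As a loose illustration of approximation by ReLU units (a hypothetical example, not the paper's construction): in one dimension, a shallow ReLU network realizes exactly the piecewise-linear interpolants, so its uniform error for a smooth target decays like $O(1/n^2)$ in the number of units:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def shallow_relu_interp(f, n, x):
    # one-hidden-layer ReLU network interpolating f at n+1 uniform knots on [-1, 1]:
    # f_hat(x) = y_0 + s_0 (x - k_0) + sum_k (s_k - s_{k-1}) relu(x - k_k)
    knots = np.linspace(-1.0, 1.0, n + 1)
    y = f(knots)
    slopes = np.diff(y) / np.diff(knots)
    out = y[0] + slopes[0] * (x - knots[0])
    for k in range(1, n):
        out += (slopes[k] - slopes[k - 1]) * relu(x - knots[k])
    return out

f = lambda u: u**2
x = np.linspace(-1.0, 1.0, 1001)
for n in (8, 16, 32):
    err = np.max(np.abs(shallow_relu_interp(f, n, x) - f(x)))
    print(n, err)       # error roughly quarters each time n doubles
```

    The paper's point is finer: for compositional targets, deep networks can exploit the low-dimensional constituent functions to beat the rates achievable by any shallow network in the ambient dimension.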

    An analysis of training and generalization errors in shallow and deep networks

    © 2019 Elsevier Ltd. This paper is motivated by an open problem around deep networks, namely, the apparent absence of over-fitting despite large over-parametrization, which allows perfect fitting of the training data. In this paper, we analyze this phenomenon in the case of regression problems when each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is inappropriate for measuring the generalization error in the approximation of compositional functions if one is to take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of maximum loss and, sometimes, as a pointwise error. We give estimates on exactly how many parameters ensure both zero training error and a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and we estimate how much error to expect at which test data.